# Sparsely Activated Mixture of Experts
## MoE-LLaVA-StableLM-1.6B-4e
MoE-LLaVA is a large-scale vision-language model built on a mixture-of-experts architecture; by activating only a sparse subset of its parameters for each token, it achieves efficient multimodal learning.

- License: Apache-2.0
- Task: Image-Text-to-Text
- Library: Transformers
- Organization: LanguageBind
- Downloads: 125
- Likes: 8
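
To make the "sparsely activated" idea concrete, here is a minimal PyTorch sketch of top-k expert routing, the mechanism that lets a mixture-of-experts layer keep most of its parameters inactive for any given token. This is an illustrative sketch only, not MoE-LLaVA's actual implementation; the class name `SparseMoELayer` and all sizes (`hidden_dim`, `ffn_dim`, `num_experts`, `top_k`) are assumptions chosen for demonstration.

```python
# Illustrative sketch of sparse top-k expert routing (not MoE-LLaVA's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    """Feed-forward layer that routes each token to its top-k experts,
    so only a fraction of the total parameters are active per token."""

    def __init__(self, hidden_dim: int, ffn_dim: int,
                 num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden_dim, num_experts)  # learned gating network
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_dim, ffn_dim),
                nn.GELU(),
                nn.Linear(ffn_dim, hidden_dim),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, hidden_dim)
        logits = self.router(x)                            # (tokens, experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)  # top-k experts per token
        weights = F.softmax(weights, dim=-1)               # renormalize over chosen experts

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            # Find which tokens routed to expert e, and in which top-k slot.
            token_idx, slot = (chosen == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue  # this expert received no tokens in the batch
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out


# Usage: route 10 tokens through 4 experts, activating only 2 per token.
layer = SparseMoELayer(hidden_dim=64, ffn_dim=256, num_experts=4, top_k=2)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Because each token passes through only `top_k` of the `num_experts` expert networks, per-token compute stays close to that of a dense model of a single expert's size, while total model capacity grows with the number of experts.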